N.B. you will need to have the skimage library installed to run this. For this reason, I have separated out the cctbx and normal Python bits. Once (if?) cctbx sorts out its build system, we can do this all in one go.

In [1]:
%matplotlib inline
from __future__ import division

Segmentation for beamstop

Liberally adapted from http://scikit-image.org/docs/dev/auto_examples/applications/plot_coins_segmentation.html#example-applications-plot-coins-segmentation-py

N.B. throwing out the top 0.1% of bright pixels is really important! (see the `clip` call below)

In [5]:
import numpy as np
import matplotlib.pyplot as plt
from cPickle import load
from scipy import ndimage as ndi
import os

percentile_clipping = 99.9
data_source = 'test_data_numpy'
files = os.listdir(data_source)
images = []
hists = []

for f in files:
    im = load(open(os.path.join(data_source, f), 'rb'))
    t_val = np.percentile(im, percentile_clipping)
    print "{}: clipping at {}%ile: {}".format(f, percentile_clipping, t_val)
    im = im.clip(0, t_val) # Throw out the top 0.1% of bright pixels
    hists.append(np.histogram(im, bins=np.arange(0, t_val, 5)))
    images.append(im)
    
numpy-shot-s00-C5_0_00001_20140517135337549.pickle: clipping at 99.9%ile: 174.0
numpy-shot-s00-G5_0_00005_20140520132634826.pickle: clipping at 99.9%ile: 145.0
numpy-shot-s00-K7_0_00006_20140518123118514.pickle: clipping at 99.9%ile: 120.0
numpy-shot-s00-K7_0_00010_20140518123304533.pickle: clipping at 99.9%ile: 130.0
numpy-shot-s01-C5_0_00003_20140517135536896.pickle: clipping at 99.9%ile: 210.0
numpy-shot-s01-G5_0_00002_20140520132428047.pickle: clipping at 99.9%ile: 120.0
numpy-shot-s01-K7_0_00001_20140518122757591.pickle: clipping at 99.9%ile: 133.0
numpy-shot-s01-K7_0_00012_20140518123351620.pickle: clipping at 99.9%ile: 136.0
numpy-shot-s02-A1_0_00002_20140518141054243.pickle: clipping at 99.9%ile: 93.0
numpy-shot-s02-A1_0_00007_20140518141402260.pickle: clipping at 99.9%ile: 59.0
numpy-shot-s02-C5_0_00002_20140517135445508.pickle: clipping at 99.9%ile: 171.0
numpy-shot-s02-C5_0_00007_20140517140034704.pickle: clipping at 99.9%ile: 189.0
numpy-shot-s02-G5_0_00004_20140520132536483.pickle: clipping at 99.9%ile: 173.0
numpy-shot-s02-K7_0_00005_20140518123056243.pickle: clipping at 99.9%ile: 127.0
numpy-shot-s02-K7_0_00007_20140518123155295.pickle: clipping at 99.9%ile: 116.0
numpy-shot-s03-A1_0_00004_20140518141229980.pickle: clipping at 99.9%ile: 103.0
numpy-shot-s03-C5_0_00006_20140517135953983.pickle: clipping at 99.9%ile: 160.0
numpy-shot-s03-G5_0_00006_20140520132708855.pickle: clipping at 99.9%ile: 147.0
numpy-shot-s03-G5_0_00007_20140520132736091.pickle: clipping at 99.9%ile: 174.0
numpy-shot-s03-K7_0_00004_20140518123024708.pickle: clipping at 99.9%ile: 89.0
numpy-shot-s03-K7_0_00011_20140518123329745.pickle: clipping at 99.9%ile: 132.0
numpy-shot-s04-C5_0_00004_20140517135712325.pickle: clipping at 99.9%ile: 205.0
numpy-shot-s04-G5_0_00001_20140520132332955.pickle: clipping at 99.9%ile: 80.0
numpy-shot-s04-K7_0_00003_20140518122937674.pickle: clipping at 99.9%ile: 168.0
numpy-shot-s04-K7_0_00008_20140518123220006.pickle: clipping at 99.9%ile: 118.0
numpy-shot-s05-A1_0_00003_20140518141155114.pickle: clipping at 99.9%ile: 83.0
numpy-shot-s05-C5_0_00005_20140517135831653.pickle: clipping at 99.9%ile: 201.0
numpy-shot-s05-D7_0_00003_20140517154628366.pickle: clipping at 99.9%ile: 118.0
numpy-shot-s05-G5_0_00003_20140520132507314.pickle: clipping at 99.9%ile: 159.0
numpy-shot-s05-K7_0_00002_20140518122849699.pickle: clipping at 99.9%ile: 116.0
numpy-shot-s05-K7_0_00009_20140518123241666.pickle: clipping at 99.9%ile: 123.0

Look at images.

In [6]:
n_per_row = 4

# 31 images: with 4/row that's 8 rows, so twice in y as in x: 40x20 then.

def get_fig_axes(n_imgs):
    n_rows = -(-n_imgs // n_per_row)  # Ceiling division, so a partial last row still gets axes.
    fig, axes = plt.subplots(n_rows, n_per_row, figsize=(20, 40))
    return fig, axes

_, axes = get_fig_axes(len(images))

for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.axis('off')
    

Find if there is a suitable common threshold for the beamstop

In [7]:
fig, ax = plt.subplots(figsize=(8, 8))

for hist in hists:
    ax.plot(hist[1][:-1], hist[0], lw=2)

# Parameters for image processing:
thresh = 15
bg = 35

ax.vlines(thresh, 0, 0.4e7)
ax.vlines(bg, 0, 0.4e7)
Out[7]:
<matplotlib.collections.LineCollection at 0x1fadc1450>

Looks like all the images have a peak around 15 ADUs. Will try this as a threshold. Most pixels are above 35, so use this for 'high'.
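The ~15 ADU peak was read off the plot by eye; a hedged sketch of locating it programmatically from a histogram is below. The `pixels` array here is a synthetic stand-in for one detector image, and the 35 ADU background cutoff is the value chosen above.

```python
import numpy as np

# Synthetic stand-in for one image: a narrow dark peak near 15 ADU
# (assumed beamstop shadow) plus a broad background well above 35 ADU.
rng = np.random.RandomState(0)
pixels = np.concatenate([
    rng.normal(15, 2, 5000),
    rng.normal(60, 20, 20000),
])
counts, edges = np.histogram(pixels, bins=np.arange(0, 150, 5))

# Candidate threshold: the highest-count bin in the low-ADU region,
# restricting the search to below the assumed background level of 35.
low = edges[:-1] < 35
peak_bin = np.argmax(counts * low)
peak_adu = 0.5 * (edges[peak_bin] + edges[peak_bin + 1])
```

On the real data one would run this over each entry of `hists` and check the peaks agree before committing to a single common threshold.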

A note on visualization:

I am mainly using contour plots. These draw a yellow line wherever the mask has a boundary between 0 (not beamstop) and 1 (beamstop). Ideally, we want a single line enclosing all of the beamstop and nothing else.

Simple Thresholding

In [33]:
_, axes = get_fig_axes(len(images))

for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(im < thresh, [0.5], linewidths=1.2, colors='y')
    ax.set_title('pixels < {}'.format(thresh))
    ax.axis('off')
    

So this is not quite including enough of the beamstop, and there are lots of 'unfilled' bits inside it.

Simple solution: add image-filling algorithms, and convolve with some smoothing functions

Will try adding a blur and another threshold to the segmentation.

1. A Gaussian blur with FWHM 10, then a threshold of 0.1, does not work well at all.
2. Binary fills before and after convolving with a 20-pixel top-hat is OK, but very slow. Note that this also 'finds' some bits around the edges, which is undesirable.
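The discarded Gaussian-blur attempt (Simple Solution 1) is not shown in a cell above; here is a minimal sketch of it on a synthetic image, assuming the usual FWHM = 2.355 x sigma conversion. The image and the 15 ADU threshold are stand-ins for the real data.

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic stand-in: a dark 'beamstop' block on a bright background.
thresh = 15
im = np.full((100, 100), 50.0)
im[40:60, 40:60] = 5.0
mask = (im < thresh).astype(float)

# Blur the binary mask with a Gaussian of FWHM ~10 px, then re-threshold.
sigma = 10 / 2.355                  # FWHM = 2.355 * sigma
blurred = ndi.gaussian_filter(mask, sigma)
smoothed_mask = blurred > 0.1       # generous threshold grows the mask
```

The low 0.1 cutoff inflates the mask by several pixels on every side, which is one reason this approach performed poorly on the real images.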

In [34]:
blur = 21

_, axes = get_fig_axes(len(images))

for n, img_data in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    t_img = np.array(img_data < thresh, dtype=int)
    t_img = ndi.binary_fill_holes(t_img) # Fill small holes
    t_img = ndi.filters.convolve(t_img.astype(int), np.ones((blur, blur)), origin=(blur // 2, blur // 2)) # Top-hat smoothing
    t_img = ndi.binary_fill_holes(t_img) # Fill big holes
    ax.imshow(img_data, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(t_img, [0.5], linewidths=1.2, colors='y')
    ax.axis('off')

Region-based segmentation

This is a more sophisticated approach, using a combination of thresholds and edge detection from the skimage package. This is the approach implemented in the auto_detect_beamstop module.

The first step is to create an elevation map:

In [36]:
from skimage.filter import sobel

_, axes = get_fig_axes(len(images))

e_maps = []
for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    elevation_map = sobel(im)
    e_maps.append(elevation_map)
    ax.imshow(elevation_map, cmap=plt.cm.jet, interpolation='nearest')
    ax.axis('off')
    ax.set_title('elevation_map')
    ax.axis('off')
    

Next, we create 'markers': pixels confidently inside the beamstop (below the low threshold) are labelled 1, and pixels confidently in the background (above the high threshold) are labelled 2. Everything in between is left unlabelled for the watershed to decide.

In [37]:
_, axes = get_fig_axes(len(images))

markers_all = []

for n, im in enumerate(images):
    ax = axes[n // n_per_row, n % n_per_row]
    markers = np.zeros_like(im)
    markers[im < thresh] = 1
    markers[im > bg] = 2
    markers_all.append(markers)
    ax.imshow(markers, cmap=plt.cm.spectral, interpolation='nearest')
    ax.axis('off')
    ax.set_title('markers')

Finally, we use the watershed transform to fill regions of the elevation map starting from the markers determined above to create a segmentation. We fill small holes after applying the algorithm to smooth things.

In [38]:
from skimage import morphology
import scipy.ndimage as ndimage

_, axes = get_fig_axes(len(images))
segmentations = []
for n, (e_map, markers) in enumerate(zip(e_maps, markers_all)):
    ax = axes[n // n_per_row, n % n_per_row]
    segmentation = morphology.watershed(e_map, markers)
    segmentation = ndimage.binary_fill_holes(segmentation - 1)
    segmentations.append(segmentation)
    ax.imshow(segmentation, cmap=plt.cm.gray, interpolation='nearest')
    ax.axis('off')
    ax.set_title('segmentation')

We can now visualize the final product using a contour at 0.5 (between 1 and 0 for beamstop and background)...

In [39]:
from skimage.color import label2rgb
_, axes = get_fig_axes(len(images))


for n, (segmentation, im) in enumerate(zip(segmentations, images)):
    ax = axes[n // n_per_row, n % n_per_row]
    labels_stop, _ = ndimage.label(segmentation)
    image_label_overlay = label2rgb(labels_stop, image=im)
    ax.imshow(im, cmap=plt.cm.gray, interpolation='nearest')
    ax.contour(segmentation, [0.5], linewidths=1.2, colors='y')
    ax.axis('off')

See the other notebook (demo_detect_beamstop) for a demo of the relevant Python modules I wrote to implement this easily.
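As a summary, the region-based pipeline above can be collapsed into one function. This is a hedged sketch, not the actual auto_detect_beamstop code; it uses the modern skimage names (`skimage.filters.sobel`, `skimage.segmentation.watershed`), which in older releases lived at `skimage.filter.sobel` and `skimage.morphology.watershed`.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel           # skimage.filter in older versions
from skimage.segmentation import watershed  # morphology.watershed in older versions

def detect_beamstop(im, thresh=15, bg=35):
    """Sketch of the pipeline above: sobel elevation map, threshold
    markers, watershed, then hole filling. Returns a boolean mask that
    is True on the beamstop."""
    elevation_map = sobel(im.astype(float))
    markers = np.zeros(im.shape, dtype=int)
    markers[im < thresh] = 1   # confidently beamstop
    markers[im > bg] = 2       # confidently background
    segmentation = watershed(elevation_map, markers)
    return ndi.binary_fill_holes(segmentation == 1)

# Quick check on a synthetic image: a dark block on a bright background.
im = np.full((80, 80), 60.0)
im[30:50, 30:50] = 5.0
mask = detect_beamstop(im)
```

`segmentation == 1` here is equivalent to the `segmentation - 1` plus `binary_fill_holes` step in the cells above, since the watershed only produces the labels 1 and 2.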